Results 1 - 5 of 5
1.
Health Technol (Berl) ; 14(1): 1-14, 2024.
Article in English | MEDLINE | ID: mdl-38229886

ABSTRACT

Purpose: This contribution explores the underuse of artificial intelligence (AI) in the health sector, what this means for practice, and how much the underuse can cost. Attention is drawn to the relevance of an issue that the European Parliament outlined as a "major threat" in 2020. At its heart is the risk that research and development on trusted AI systems for medicine and digital health will pile up in lab centers without generating further practical relevance. Our analysis highlights why researchers, practitioners, and especially policymakers should pay attention to this phenomenon. Methods: The paper examines the ways in which governments and public agencies are addressing the underuse of AI. As governments and international organizations often acknowledge the limitations of their own initiatives, the contribution explores the causes of the current issues and suggests ways to improve initiatives for digital health. Results: Recommendations address the development of standards, models of regulatory governance, assessment of the opportunity costs of underusing technology, and the urgency of the problem. Conclusions: The exponential pace of AI advances and innovations makes the risks of underuse increasingly threatening.

2.
Minds Mach (Dordr) ; 30(3): 439-455, 2020.
Article in English | MEDLINE | ID: mdl-32929305

ABSTRACT

The paper deals with the governance of Unmanned Aircraft Systems (UAS) in European law. Three different kinds of balance have been struck between multiple regulatory systems, depending on which sector of UAS governance is taken into account. The first model regards the field of civil aviation law and its European Union (EU) regulation: the model looks like a traditional mix of top-down regulation and soft law. The second model concerns the EU's general data protection law, the GDPR, which has set up a co-regulatory framework summed up by the principle of accountability that applies to, but not only to, the field of drones. The third model of governance has been adopted by the EU through methods of legal experimentation and coordination mechanisms for UAS. The overall aim of the paper is to elucidate the ways in which these three models interact, insisting on differences and similarities with other technologies (e.g. self-driving cars) and other legal systems (e.g. the US).

3.
Int J Med Robot ; 15(1): e1968, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30397993

ABSTRACT

BACKGROUND: This paper aims to move the debate forward regarding the potential for artificial intelligence (AI) and autonomous robotic surgery, with a particular focus on ethics, regulation, and legal aspects (such as civil law, international law, tort law, liability, medical malpractice, privacy, and product/device legislation, among others). METHODS: We conducted an intensive literature search on current or emerging AI and autonomous technologies (eg, vehicles), military and medical technologies (eg, surgical robots), relevant frameworks and standards, and cybersecurity/safety and legal systems worldwide. We discuss the unique challenges that robotic surgery poses for proposals made for AI more generally (eg, explainable AI) and machine learning more specifically (eg, the black box problem), as well as recommendations for developing and improving relevant frameworks or standards. CONCLUSION: We classify responsibility into the following: (1) Accountability; (2) Liability; and (3) Culpability. All three aspects were addressed when discussing responsibility for AI and autonomous surgical robots, whether the patients are civil or military (however, these aspects may require revision in cases where robots become citizens). The component which produces the least clarity is Culpability, since it is unthinkable in the current state of technology. We envision that in the near future a surgical robot will be able to learn and perform routine operative tasks that can then be supervised by a human surgeon. This represents a surgical parallel to autonomously driven vehicles: a human remains in the 'driving seat' as a 'doctor-in-the-loop', thereby safeguarding patients undergoing operations supported by surgical machines with autonomous capabilities.


Subjects
Artificial Intelligence, Robotic Surgical Procedures/ethics, Robotic Surgical Procedures/legislation & jurisprudence, Algorithms, Computer Security, Medical Ethics, Europe, Humans, Medical Errors, United States
4.
Philos Trans A Math Phys Eng Sci ; 376(2133)2018 Oct 15.
Article in English | MEDLINE | ID: mdl-30323004

ABSTRACT

Scholars have increasingly discussed the legal status(es) of robots and artificial intelligence (AI) systems over the past three decades; however, the 2017 resolution of the EU Parliament on the 'electronic personhood' of AI robots has reignited the debate and even made it ideological. Against this background, the aim of the paper is twofold. First, the intent is to show how today's discussion on the legal status(es) of AI systems often leads to different kinds of misunderstanding regarding both the legal personhood of AI robots and their status as accountable agents establishing rights and obligations in contracts and business law. Second, the paper claims that whether or not the legal status of AI systems as accountable agents in civil (as opposed to criminal) law makes sense is an empirical issue, which should not be 'politicized'. Rather, a pragmatic approach seems preferable, as shown by methods of competitive federalism and legal experimentation. In the light of the classical distinction between primary and secondary rules of the law, examples of competitive federalism and legal experimentation aim to show how the secondary rules of the law can help us understand what kind of primary rules we may wish for our AI robots. This article is part of the theme issue 'Governing artificial intelligence: ethical, legal, and technical opportunities and challenges'.

5.
Minds Mach (Dordr) ; 28(4): 689-707, 2018.
Article in English | MEDLINE | ID: mdl-30930541

ABSTRACT

This article reports the findings of AI4People, an Atomium-EISMD initiative designed to lay the foundations for a "Good AI Society". We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations (to assess, to develop, to incentivise, and to support good AI), some of which may be undertaken directly by national or supranational policymakers, while others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.
